The Riemann zeta function plays a central role in abstract mathematics due to its relationship with prime numbers. It also appears, perhaps somewhat surprisingly, in more practical applications. This presentation will show its prominence, as well as that of a related function, in certain representations of trigonometric functions as infinite expansions.
The zeta function is defined for an argument with real part greater than unity by the series
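In standard notation, with $s$ denoting the argument,
$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s} , \qquad \operatorname{Re} s > 1$$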
Historically the first connection between the zeta function and trigonometry occurred in the process of solving the Basel problem, which was to evaluate the infinite sum of inverse squared integers
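that is, the quantity
$$\sum_{n=1}^{\infty} \frac{1}{n^2}$$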
which is $\pi^2 / 6$. Euler did this by regarding the sine function as an infinite polynomial composed of factors for each of its roots at multiples of π, both positive and negative, as well as zero. That is, set
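presumably as a product over all integer roots, with the factor for the root at zero written separately,
$$\sin x = x \prod_n{}' \left( 1 - \frac{x}{\pi n} \right)$$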
where the prime indicates zero is excluded from all other integers. When the right-hand product is expanded, the coefficient of the cubic term will consist of the sum of each inverse square. This sum can then be equated to the known coefficient for a power series expansion of sine to give
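Since the cubic coefficient of the sine is $-\tfrac{1}{6}$, the comparison reads
$$-\frac{1}{\pi^2} \sum_{n=1}^{\infty} \frac{1}{n^2} = -\frac{1}{6} \qquad\Longrightarrow\qquad \zeta(2) = \frac{\pi^2}{6}$$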
The coefficient of the quintic term will consist of the sum of distinct pairs of inverse squares. Again equating this to the known coefficient, one first has
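with the quintic coefficient of the sine equal to $\tfrac{1}{120}$, presumably
$$\frac{1}{\pi^4} \sum_{m > n \ge 1} \frac{1}{m^2 n^2} = \frac{1}{120}$$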
Writing out the first few terms of each, the sum of distinct pairs is seen to be half that of nonequal pairs, which can in turn be written as the difference of two zeta function values. That is,
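with both indices positive,
$$\sum_{m > n \ge 1} \frac{1}{m^2 n^2} = \frac{1}{2} \sum_{m \ne n} \frac{1}{m^2 n^2} = \frac{1}{2} \left[ \zeta(2)^2 - \zeta(4) \right]$$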
which then gives
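namely
$$\frac{1}{2} \left[ \frac{\pi^4}{36} - \zeta(4) \right] = \frac{\pi^4}{120} \qquad\Longrightarrow\qquad \zeta(4) = \frac{\pi^4}{90}$$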
Further values of the zeta function can be determined in the same manner, but the process is not particularly enlightening. It is much more interesting to look at other trigonometric functions!
Consider the derivative
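of the logarithm of the sine,
$$\frac{d}{dx} \ln \sin x = \cot x$$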
When the logarithm is applied to the representation above of the sine as an infinite product, the factors separate into a linear series. That means
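$$\cot x = \frac{1}{x} + \sum_{n \ne 0} \frac{1}{x - \pi n}$$
where the summation runs over all nonzero integers and is understood to be evaluated symmetrically.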
Combining terms symmetric in index about zero, this can also be written
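namely
$$\cot x = \frac{1}{x} + \sum_{n=1}^{\infty} \frac{2x}{x^2 - \pi^2 n^2}$$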
which explains an expansion in a standard reference.
This function is composed entirely of simple poles: the expected one at the origin, plus an infinite series in either direction. This is reminiscent of the definition of the Weierstrass elliptic function, albeit with a different order to the poles.
To connect the first form to the usual power series expansion, first evaluate multiple derivatives of individual poles,
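say
$$\frac{d^m}{dx^m} \frac{1}{x - \pi n} = \frac{(-1)^m \, m!}{(x - \pi n)^{m+1}}$$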
then construct the Taylor series about the origin of the sum of poles:
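evaluating each derivative at $x = 0$,
$$\sum_{n \ne 0} \frac{1}{x - \pi n} = \sum_{n \ne 0} \sum_{m=0}^{\infty} \frac{x^m}{m!} \, \frac{(-1)^m \, m!}{(-\pi n)^{m+1}} = -\sum_{m=0}^{\infty} \frac{x^m}{\pi^{m+1}} \sum_{n \ne 0} \frac{1}{n^{m+1}}$$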
Only odd powers of m will lead to a nonzero result. Relabeling this index gives
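setting $m = 2k - 1$ and using $\sum_{n \ne 0} n^{-2k} = 2\zeta(2k)$,
$$\cot x = \frac{1}{x} - 2 \sum_{k=1}^{\infty} \frac{\zeta(2k)}{\pi^{2k}} x^{2k-1}$$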
which is a rather compact form for the series. Note that the coefficients of the expansion are proportional to values of the zeta function in a simple and regular way.
Comparing this expression to the known series for this function,
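namely
$$\cot x = \frac{1}{x} + \sum_{k=1}^{\infty} \frac{(-1)^k \, 2^{2k} B_{2k}}{(2k)!} x^{2k-1}$$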
leads to a known relationship between values of the zeta function and Bernoulli numbers:
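namely
$$\zeta(2k) = \frac{(-1)^{k+1} (2\pi)^{2k} B_{2k}}{2 \, (2k)!}$$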
This derivation has no need of the functional equation for the Riemann zeta function. Presumably this is how Euler arrived at the relationship.
Using the product representation of the sine, one can represent the cosecant function as
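presumably by simply inverting each factor,
$$\csc x = \frac{1}{x} \prod_n{}' \left( 1 - \frac{x}{\pi n} \right)^{-1}$$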
The product runs over all integers, so that relabeling the index gives
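presumably a form equivalent to
$$\csc x = \frac{1}{x} \prod_n{}' \frac{\pi n}{\pi n - x}$$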
Since the denominator here consists of linear terms, the entire expression can be expanded in a series of simple poles. Attempting a standard partial fraction decomposition for an infinite product appears futile, until one remembers that the product is a complex function and must have the same pole structure as its series expansion.
The residues at each pole are found by removing the corresponding zero denominator from the total product and evaluating the remainder. At the origin the product is unity because all numerators and denominators are equal. This gives immediately
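the unit residue at the origin,
$$\operatorname*{Res}_{x=0} \csc x = 1$$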
The residues at multiples of π are a bit trickier. The overall factor of π in numerators and denominators will cancel, so that the remainder can be at most a rational function of integers. In order to arrive at a consistent result that does not depend on order of cancellation of factors, limit the upper and lower bounds of the product. For the pole at $x = \pi m$, the residue with limited bounds is
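Writing $N$ for a common bound on the index, with $0 < m \le N$, the surviving factors presumably combine into
$$(-1)^m \, \frac{(N!)^2}{(N-m)! \, (N+m)!}$$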
Since there are equal numbers of factors in both numerator and denominator, the fractional factor here becomes unity as the bounds grow without limit. All that remains is the alternating sign, so that
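$$\csc x = \frac{1}{x} + \sum_{n \ne 0} \frac{(-1)^n}{x - \pi n}$$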
which differs from the series for the cotangent only in the alternating sign. Like the cotangent series, this function is composed entirely of simple poles.
Again combining terms symmetric in index about zero, this can also be written
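namely
$$\csc x = \frac{1}{x} + \sum_{n=1}^{\infty} \frac{(-1)^n \, 2x}{x^2 - \pi^2 n^2}$$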
which appears in the standard reference.
To connect the first form to the usual power series expansion, first define an alternating zeta function due to Dirichlet:
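$$\eta(s) = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^s}$$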
Subtracting this function from the Riemann zeta function leaves only terms with double the index. This provides a relationship between the two functions:
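$$\zeta(s) - \eta(s) = 2 \sum_{n=1}^{\infty} \frac{1}{(2n)^s} = 2^{1-s} \zeta(s) \qquad\Longrightarrow\qquad \eta(s) = \left( 1 - 2^{1-s} \right) \zeta(s)$$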
Now construct the Taylor series about the origin of the sum of poles as above:
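$$\sum_{n \ne 0} \frac{(-1)^n}{x - \pi n} = -\sum_{m=0}^{\infty} \frac{x^m}{\pi^{m+1}} \sum_{n \ne 0} \frac{(-1)^n}{n^{m+1}}$$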
Again only odd powers of m will lead to a nonzero result. Relabeling this index gives
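setting $m = 2k - 1$ and using $\sum_{n \ne 0} (-1)^n n^{-2k} = -2\eta(2k)$,
$$\csc x = \frac{1}{x} + 2 \sum_{k=1}^{\infty} \frac{\eta(2k)}{\pi^{2k}} x^{2k-1}$$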
which differs from the cotangent expansion in having the zeta function replaced by the eta function, along with a change of sign before the summation.
Comparing this expression to the known series for this function,
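namely
$$\csc x = \frac{1}{x} + \sum_{k=1}^{\infty} \frac{(-1)^{k+1} \, 2 \left( 2^{2k-1} - 1 \right) B_{2k}}{(2k)!} x^{2k-1}$$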
gives a relationship between values of the eta function and Bernoulli numbers,
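namely
$$\eta(2k) = \frac{(-1)^{k+1} \left( 2^{2k-1} - 1 \right) \pi^{2k} B_{2k}}{(2k)!}$$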
which is easily seen to be consistent with the previous such relationship knowing how the eta function relates to the zeta function.
An expansion for the square of the cosecant is easily obtained knowing the derivative of the cotangent,
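$$\frac{d}{dx} \cot x = -\csc^2 x \qquad\Longrightarrow\qquad \csc^2 x = \frac{1}{x^2} + \sum_{n \ne 0} \frac{1}{(x - \pi n)^2}$$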
which appears in the standard reference without explicit separation of the pole at the origin. Since the square of the cotangent is linearly related to this square, one has immediately
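$$\cot^2 x = \csc^2 x - 1 = -1 + \frac{1}{x^2} + \sum_{n \ne 0} \frac{1}{(x - \pi n)^2}$$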
Taking a derivative of the expansion for cosecant gives an expansion for the product
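of cosecant and cotangent,
$$\frac{d}{dx} \csc x = -\csc x \cot x \qquad\Longrightarrow\qquad \csc x \cot x = \frac{1}{x^2} + \sum_{n \ne 0} \frac{(-1)^n}{(x - \pi n)^2}$$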
These expansions are similar to that of the Weierstrass elliptic function with its poles of order two.
Comparing these expansions with the individual function expansions, and including the pole at the origin in the summations for compactness, one must necessarily have
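With every summation now running over all integers $n$, these three equalities are presumably, in order,
$$\left[ \sum_n \frac{(-1)^n}{x - \pi n} \right]^2 = \sum_n \frac{1}{(x - \pi n)^2}$$
$$\left[ \sum_n \frac{1}{x - \pi n} \right]^2 = -1 + \sum_n \frac{1}{(x - \pi n)^2}$$
$$\left[ \sum_n \frac{(-1)^n}{x - \pi n} \right] \left[ \sum_n \frac{1}{x - \pi n} \right] = \sum_n \frac{(-1)^n}{(x - \pi n)^2}$$
for the square of the cosecant, the square of the cotangent and their product respectively.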
The first and third single summations are patently equal to the double summations when the two summation indices are equal. To understand how the remaining terms in the first summation cancel each other, decompose the product
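$$\frac{1}{(x - \pi m)(x - \pi n)} = \frac{1}{\pi (m - n)} \left[ \frac{1}{x - \pi m} - \frac{1}{x - \pi n} \right]$$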
and, writing one summation index as a shift of the other, look at the expression
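presumably, with $m$ now denoting the shift between the two indices,
$$\sum_{m \ne 0} \frac{(-1)^m}{\pi m} \sum_n \left[ \frac{1}{x - \pi (n + m)} - \frac{1}{x - \pi n} \right]$$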
The inner summation is telescoping: the first term inside the bracket will be cancelled by the second term displaced m steps away. This inner summation is thus conditionally zero for every m.
The extraneous terms of the third double summation contain an extra alternating sign:
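presumably
$$\sum_{m \ne 0} \frac{1}{\pi m} \sum_n (-1)^n \left[ \frac{1}{x - \pi (n + m)} - \frac{1}{x - \pi n} \right]$$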
For m even the inner summation is again conditionally zero for the same reason. This happens because the alternating sign has the same period as the steps in m. When this quantity is odd there is doubling instead of cancelling, so at first it appears there still remain the terms
but now the sign of m leads the inner summation to cancel in pairs on either side of zero.
The problem is that for the second double summation, one has the expression
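presumably
$$\sum_{m \ne 0} \frac{1}{\pi m} \sum_n \left[ \frac{1}{x - \pi (n + m)} - \frac{1}{x - \pi n} \right]$$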
which at first sight appears to be zero as well. This cannot be the case, since there is an extra constant term in the second equality. The previous inner summations have been labeled conditionally convergent in anticipation of this discrepancy.
The situation can be rationalized by knowing that while a complex function must have the same pole structure however expressed, there may also be a portion without poles. In the case at hand this extra portion is a constant that arises from the conditional convergence and needs to be evaluated explicitly.
The extraneous terms of the second double expansion are of the form
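presumably something like
$$\sum_{m \ne 0} \sum_{\substack{n \ne 0 \\ n \ne m}} \frac{1}{(x - \pi m)(x - \pi n)} + \frac{1}{x} \sum_{n \ne 0} \frac{1}{x - \pi n} + \frac{1}{x} \sum_{m \ne 0} \frac{1}{x - \pi m}$$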
where indices equal to zero have been explicitly separated. Combining pairs on either side of zero, the last two partial summations are
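$$\frac{2}{x} \sum_{n=1}^{\infty} \frac{2x}{x^2 - \pi^2 n^2} = 4 \sum_{n=1}^{\infty} \frac{1}{x^2 - \pi^2 n^2}$$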
This expression is finite at the origin and equal to $-\tfrac{2}{3}$ there.
With respect to the first partial summation, consider a related summation without any terms excluded. Again evaluating the summation at the origin, integers occurring in denominators will either have the same or opposite signs. This gives the simple result
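$$\sum_{m \ne 0} \sum_{n \ne 0} \frac{1}{\pi^2 m n} = \frac{1}{\pi^2} \left[ \sum_{n \ne 0} \frac{1}{n} \right]^2 = 0$$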
which indicates that the desired summation with equal indices excluded is simply the negative of that with equal indices:
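$$\sum_{m \ne 0} \sum_{\substack{n \ne 0 \\ n \ne m}} \frac{1}{\pi^2 m n} = -\sum_{n \ne 0} \frac{1}{\pi^2 n^2} = -\frac{2\zeta(2)}{\pi^2} = -\frac{1}{3}$$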
Putting pieces together, the value of extraneous terms in the second double summation at the origin is $-\tfrac{1}{3} - \tfrac{2}{3} = -1$,
which makes perfect sense: this constant needs to balance the explicit constant appearing in the equality.
The same process can be applied to the first and third double summations, taking account of alternating signs. Since the summations over all nonzero integers still vanish by symmetry, the reasoning regarding the first partial summation is unaffected. In the final stage of each partial summation, a zeta function is replaced by the negative of an eta function when there is only one index on the alternating sign.
With $\eta(2) = \pi^2 / 12$, the value of extraneous terms in the first double summation at the origin is zero, and the value of extraneous terms in the third double summation at the origin is likewise zero.
These values are consistent with the equalities above under consideration. They also help to explain why a naïve evaluation of the summations avoids the issue of conditional convergence.
To determine a power series expansion of the square of the cosecant about the origin, first evaluate multiple derivatives of individual double poles,
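$$\frac{d^m}{dx^m} \frac{1}{(x - \pi n)^2} = \frac{(-1)^m (m+1)!}{(x - \pi n)^{m+2}}$$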
then construct the Taylor series about the origin of the sum of poles:
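$$\sum_{n \ne 0} \frac{1}{(x - \pi n)^2} = \sum_{m=0}^{\infty} (m+1) \frac{x^m}{\pi^{m+2}} \sum_{n \ne 0} \frac{1}{n^{m+2}}$$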
In this case only even powers of m will lead to a nonzero result. Relabeling this index gives
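setting $m = 2k - 2$,
$$\csc^2 x = \frac{1}{x^2} + 2 \sum_{k=1}^{\infty} (2k - 1) \frac{\zeta(2k)}{\pi^{2k}} x^{2k-2}$$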
The corresponding series for the product of cosecant and cotangent differs by an alternating sign,
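$$\sum_{n \ne 0} \frac{(-1)^n}{(x - \pi n)^2} = \sum_{m=0}^{\infty} (m+1) \frac{x^m}{\pi^{m+2}} \sum_{n \ne 0} \frac{(-1)^n}{n^{m+2}}$$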
Again only even powers of m will lead to a nonzero result. Relabeling this index gives
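$$\csc x \cot x = \frac{1}{x^2} - 2 \sum_{k=1}^{\infty} (2k - 1) \frac{\eta(2k)}{\pi^{2k}} x^{2k-2}$$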
which again differs from the previous expansion in having the zeta function replaced by the eta function, along with a change of sign before the summation.
Products of the expansions for single functions will be equal to the expansions just constructed. One must necessarily have
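For the square of the cosecant, for instance, this means
$$\left[ \frac{1}{x} + 2 \sum_{k=1}^{\infty} \frac{\eta(2k)}{\pi^{2k}} x^{2k-1} \right]^2 = \frac{1}{x^2} + 2 \sum_{k=1}^{\infty} (2k - 1) \frac{\zeta(2k)}{\pi^{2k}} x^{2k-2}$$
with analogous equalities for the square of the cotangent and the product of cosecant and cotangent.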
The equivalences of these expressions imply relationships among values of the zeta and eta functions. Since $\zeta(0) = -\tfrac{1}{2}$ and $\eta(0) = \tfrac{1}{2}$, these can be written more compactly as
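presumably by absorbing the poles into summations that begin at $k = 0$, for example
$$4 \left[ \sum_{k=0}^{\infty} \frac{\eta(2k)}{\pi^{2k}} x^{2k-1} \right]^2 = 2 \sum_{k=0}^{\infty} (2k - 1) \frac{\zeta(2k)}{\pi^{2k}} x^{2k-2}$$
with similar forms for the other two products.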
Expanding and equating coefficients on both sides of each expression gives
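presumably relations of the form
$$\sum_{i=0}^{k} \eta(2i) \, \eta(2k - 2i) = \left( k - \tfrac{1}{2} \right) \zeta(2k)$$
$$\sum_{i=0}^{k} \eta(2i) \, \zeta(2k - 2i) = \left( k - \tfrac{1}{2} \right) \eta(2k)$$
$$\sum_{i=0}^{k} \zeta(2i) \, \zeta(2k - 2i) = \left( k - \tfrac{1}{2} \right) \zeta(2k) , \qquad k \ge 2$$
where the restriction on the last relation reflects the constant in the cotangent equality, and restricting its indices to run from one to $k - 1$ recovers the classical Euler relation $\sum \zeta(2i) \, \zeta(2k - 2i) = \left( k + \tfrac{1}{2} \right) \zeta(2k)$.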
These relationships are not recursion relations, since the argument of functions is of the same order on both sides of each equation. Just a final curiosity of the appearance of zeta functions in trigonometry...
Uploaded 2024.01.11 analyticphysics.com